One-bit Supervision for Image Classification
This paper presents one-bit supervision, a novel setting of learning from incomplete annotations, in the scenario of image classification. Instead of training a model on the accurate label of each sample, our setting requires the model to query with a predicted label for each sample and learn from the answer whether the guess is correct. This provides one bit (yes or no) of information, and more importantly, annotating each sample becomes much easier than finding the accurate label among many candidate classes. There are two keys to training a model under one-bit supervision: improving the guess accuracy and making use of incorrect guesses. For these purposes, we propose a multi-stage training paradigm which incorporates negative label suppression into an off-the-shelf semi-supervised learning algorithm.
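The two outcomes of a one-bit query can be sketched in a few lines. This is a minimal toy illustration, not the authors' implementation: the oracle function and the renormalization step are hypothetical names chosen here, and the "suppress and renormalize" treatment of a wrong guess is one simple reading of negative label suppression.

```python
import numpy as np

def one_bit_query(true_label, predicted_label):
    """Hypothetical oracle: returns one bit -- is the guess correct?"""
    return predicted_label == true_label

# Toy example: model's predicted distribution for one sample over 5 classes.
probs = np.array([0.1, 0.5, 0.2, 0.1, 0.1])
guess = int(np.argmax(probs))  # query the top-1 prediction (class 1)
true_label = 2

if one_bit_query(true_label, guess):
    # "Yes": the guess can be used as a full (positive) label.
    hard_label = guess
else:
    # "No": we only learn the sample is NOT the guessed class.
    # Suppress that class and renormalize the remaining probabilities.
    probs[guess] = 0.0
    probs /= probs.sum()
```

After a negative answer, the suppressed distribution still carries usable information: the second-most-confident class becomes the new top candidate.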
- Asia > China > Anhui Province > Hefei (0.04)
- North America > Canada (0.04)
Review for NeurIPS paper: One-bit Supervision for Image Classification
Additional Feedback: I consider this work a new method in the context of semi-supervised learning and active learning. Indeed, these are the two topics the authors reviewed as related work. The method is essentially yet another way to rearrange labeled and unlabeled samples in order to identify "active" samples that improve learning accuracy. Thus, it is not an eye-opening, truly novel approach; I would argue that this method is incrementally novel at best.
Review for NeurIPS paper: One-bit Supervision for Image Classification
The paper proposes a new paradigm for image annotation called one-bit supervision, based on asking whether an image belongs to a predicted category or not. Under the assumption that annotating an image with K categories is as expensive as log K annotations of the one-bit form, the paper shows that multi-stage semi-supervised learning using one-bit supervision is more effective than standard semi-supervised learning under the same annotation cost. The setup is interesting and convincing as a first step, but as the reviewers noted, the clarity of the exposition and claims can improve. Also, it is worth elaborating whether you use the softmax cross-entropy loss as mentioned in L112 or the L2 loss in Eq. (1).
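The cost assumption mentioned above is easy to make concrete with a quick calculation. This is only an illustrative sketch of the budget equivalence, assuming the information-theoretic cost model the review describes (one full K-way label costs log2 K bits); the specific K = 100 is an arbitrary choice here.

```python
import math

K = 100  # assumed number of classes for illustration

# Under the stated cost model, one full K-way annotation carries
# log2(K) bits, i.e. it is as expensive as about log2(K) yes/no queries.
bits_per_full_label = math.log2(K)

print(f"1 full label over {K} classes ~ {bits_per_full_label:.2f} one-bit queries")
```

So with K = 100, each fully labeled image trades for roughly 6 to 7 one-bit questions under the same annotation budget, which is the exchange rate the comparison in the paper relies on.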